    UNDERSTANDING USER’S TRUST FORMATION ON MULTI-SIDED E-COMMERCE PLATFORMS

    With the ever-growing popularity of online shopping, platform environments providing access to products from multiple sellers increasingly attract users. To reduce information asymmetry and enhance user trust, platform actors provide signals such as star reviews to demonstrate their trustworthiness. This work investigates the influence of trust signals from different sources (on the platform itself vs. on external third-party review sites) and for different targets (platform provider vs. seller) on users’ trust formation in multi-sided e-commerce platforms. We conduct a choice-based conjoint analysis based on data from 81 participants. Our results show that users weight external signals more heavily than internal ones when building trust. Also, trust signals for sellers have a higher impact on users’ trust than platform provider signals. Signal discrepancies between internal and external reviews are especially harmful to the platform provider. These insights extend prior knowledge on trust formation and the factors influencing it on e-commerce platforms.

    Overcoming Innovation Resistance beyond Status Quo Bias - A Decision Support System Approach (Research-in-Progress)

    When innovative products and services are launched to the market, many consumers initially resist adopting them, even if the innovation is likely to enhance their life quality. Explanations for this behavior can also be found in specific personality traits and in general pitfalls of human decision-making. We believe that decision support systems (DSS) can help alleviate such innovation resistance. We propose a DSS design that addresses innovation resistance to complex innovations on an individual’s cognitive level. An experimental study will be conducted to test the influence of different DSS modifications on the perception and selection of complex innovations. We aim to identify levers for reducing innovation resistance and to derive DSS design implications.

    “May I Help You?”: Exploring the Effect of Individuals’ Self-Efficacy on the Use of Conversational Agents

    Conversational agents (CAs) increasingly permeate our lives and offer us assistance for a myriad of tasks. Despite promising measurable benefits, CA use remains below expectations. To complement prior technology-focused research, this study takes a user-centric perspective and explores an individual’s characteristics and dispositions as a factor influencing CA use. In particular, we investigate how individuals’ self-efficacy, i.e., their belief in their own skills and abilities, affects their decision to seek assistance from a CA. We present the research model and study design for a laboratory experiment. In the experiment, participants complete two tasks embedded in realistic scenarios involving websites with integrated CAs, which they may use for assistance. Initial results confirm the influence of individuals’ self-efficacy beliefs on their decision to use CAs. By taking a human-centric perspective and observing actual behavior, we expect to contribute to CA research by exploring a factor likely to drive CA use.

    On the Influence of Cognitive Styles on Users’ Understanding of Explanations

    Artificial intelligence (AI) is becoming increasingly complex, making it difficult for users to understand how the AI has derived its predictions. Using explainable AI (XAI) methods, researchers aim to explain AI decisions to users. So far, XAI-based explanations have pursued a technology-focused approach, neglecting the influence of users’ cognitive abilities and differences in information processing on the understanding of explanations. Hence, this study takes a human-centered perspective and incorporates insights from cognitive psychology. In particular, we draw on the psychological construct of cognitive styles, which describes humans’ characteristic modes of processing information. Applying a between-subject experiment design, we investigate how users’ rational and intuitive cognitive styles affect their objective and subjective understanding of different types of explanations provided by an AI. Initial results indicate substantial differences in users’ understanding depending on their cognitive style. We expect to contribute to a more nuanced view of the interrelation of human factors and XAI design.

    What Fits Tim Might Not Fit Tom: Exploring the Impact of User Characteristics on Users’ Experience with Conversational Interaction Modalities

    Companies increasingly implement conversational agents (CAs), which can be text- or voice-based. While both interaction modalities have different implications for user interaction, it ultimately depends on the users how they perceive these design options. Research indicates that users’ perception and evaluation of information systems is affected by their individual characteristics, i.e., their dispositional traits and needs. To investigate the impact of user characteristics on the user experience with text- and voice-based CAs, we draw on task-technology fit (TTF) theory and develop a research design including a lab experiment. We developed and tested two CAs and conducted a pilot study of the experiment. Initial results indicate that user characteristics influence how users perceive the experience with text- and voice-based CAs. We expect the results of our research to extend TTF theory to the context of conversational interfaces and to guide companies in designing CAs that deliver a satisfying user experience.

    Factors that Influence the Adoption of Human-AI Collaboration in Clinical Decision-Making

    Recent developments in Artificial Intelligence (AI) have fueled the emergence of human-AI collaboration, a setting where AI is a coequal partner. Especially in clinical decision-making, it has the potential to improve treatment quality by assisting overworked medical professionals. Even though research has started to investigate the utilization of AI for clinical decision-making, its potential benefits do not imply its adoption by medical professionals. While several studies have started to analyze adoption criteria from a technical perspective, research providing a human-centered perspective with a focus on AI's potential for becoming a coequal team member in the decision-making process remains limited. Therefore, in this work, we identify factors for the adoption of human-AI collaboration by conducting a series of semi-structured interviews with experts in the healthcare domain. We identify six relevant adoption factors and highlight existing tensions between them and effective human-AI collaboration.